

Search for: All records

Creators/Authors contains: "Borenstein, J."


  1. Robots, humanoid and otherwise, are often created with the underlying motivation that they will either replace or complement activities performed by humans. Robots have for many years been designed to take over “dull, dirty, or dangerous” tasks (e.g., Singer 2009). Over time, roboticists and others within computing communities have extended their ambitions, seeking to create technology that emulates more complex ranges of human-like behavior, potentially including the ability to participate in complicated conversations. Regardless of how sophisticated its functionality is, a robot should arguably be encoded with ethical decision-making parameters, especially if it is going to interact with or could potentially endanger a human being. Yet determining the nature and specification of such parameters raises many longstanding and difficult philosophical questions.
    Free, publicly-accessible full text available May 16, 2024
  2. Our research team has been investigating methods for enabling robots to behave ethically while interacting with human beings. Our approach relies on two main sources of data for determining what counts as “ethical” behavior: the first is the views of average adults, which we refer to as “folk morality”, and the second is the views of ethics experts. Yet the enterprise of identifying what should ground a robot’s decisions about ethical matters raises many fundamental metaethical questions. Here, we focus on one main metaethical question: would reason dictate that it is more justifiable to base a robot’s decisions on folk morality or on the guidance of ethics experts? The goal of this presentation is to highlight some of the arguments for and against each respective point of view, and the implications such arguments might have for the endeavor to encode ethical decision-making processes into robots.
  3. As robots become more intelligent and more commonly used, it is critical that they behave ethically in human-robot interactions. However, there is a lack of agreement on a correct moral theory to guide human behavior, let alone robots. This paper introduces a robotic architecture that leverages cases drawn from different ethical frameworks to guide the ethical decision-making process and select the appropriate robotic action for the specific situation. We also present an implementation of the architecture for a pill-sorting task for older adults, where the robot must decide whether it is appropriate to provide false encouragement so that the adults remain engaged in the training task.
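The case-based selection idea in this abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' actual architecture: the `EthicalCase` structure, the feature names, and the overlap-count similarity measure are all assumptions introduced here.

```python
# Hypothetical sketch of case-based ethical action selection.
# EthicalCase, the feature names, and the similarity measure are
# illustrative assumptions, not the paper's actual design.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class EthicalCase:
    features: dict   # situation features, e.g. {"risk": "high", "user": "older_adult"}
    framework: str   # ethical framework under which the case was judged
    action: str      # action judged appropriate in that case

def overlap(a: dict, b: dict) -> int:
    """Count matching feature values between two situations."""
    return sum(1 for k, v in a.items() if b.get(k) == v)

def select_action(situation: dict, case_base: list[EthicalCase]) -> str:
    """Return the action from the stored case most similar to the situation."""
    best = max(case_base, key=lambda c: overlap(situation, c.features))
    return best.action
```

A richer implementation would weight features and break ties between frameworks, but the retrieve-then-act shape is the core of a case-based approach.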
  4. This paper examines the metaethical dimensions of the computing community’s efforts to program ethical decision-making abilities into robots. Arguments for and against that endeavor are outlined along with brief recommendations for the human-robot interaction realm.
  5. Ethical decision-making is difficult for humans, and all the more so for robots. If a robot's ethical decision-making process is to be designed on some approximation of how humans operate, then the assumption is that a good model of how humans make an ethical choice is readily available. Yet no single ethical framework seems sufficient to capture the diversity of human ethical decision-making. Our work seeks to develop the computational underpinnings that will allow a robot to use multiple ethical frameworks to guide it toward doing the right thing. As a step toward this goal, we have collected data investigating how regular adults and ethics experts approach ethical decisions arising in a healthcare scenario and a game-playing scenario. The decisions made by the former group are intended to represent an approximation of a folk morality approach to these dilemmas. The experts, on the other hand, were asked to judge what decision would result if a person were using one of several different types of ethical frameworks. The resulting data may reveal which features of the pill-sorting and game-playing scenarios contribute to similarities and differences between expert and non-expert responses. A robot programmed with this type of approach may one day be able to rely on specific features of an interaction to determine which ethical framework to use in its decision-making.
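The folk-vs-expert comparison described above might be computed along these lines. The encoding of responses as strings and the helper names are hypothetical; the sketch only shows the shape of the analysis, not the study's actual method.

```python
# Illustrative sketch of comparing folk-morality responses with
# expert framework judgments; response encodings are hypothetical.
from collections import Counter

def majority(responses: list) -> str:
    """Most common choice among a group of respondents."""
    return Counter(responses).most_common(1)[0][0]

def framework_match(folk_responses: list, expert_by_framework: dict) -> dict:
    """For each formal framework, check whether the expert-predicted
    decision agrees with the folk-morality majority for the same scenario."""
    folk_choice = majority(folk_responses)
    return {fw: decision == folk_choice
            for fw, decision in expert_by_framework.items()}
```

Running this per scenario feature (e.g. high- vs low-risk task) would surface which features drive agreement or divergence between the two groups.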
  6. This paper describes current progress on developing an ethical architecture for robots that are designed to follow human ethical decision-making processes. We surveyed both regular adults (folks) and ethics experts (experts) on what they consider to be ethical behavior in two specific scenarios: pill-sorting with an older adult and game playing with a child. A key goal of the surveys is to better understand human ethical decision-making. In the first survey, folk responses were based on the subject’s ethical choices (“folk morality”); in the second survey, expert responses were based on the expert’s application of different formal ethical frameworks to each scenario. We observed that most of the formal ethical frameworks we included in the survey (Utilitarianism, Kantian Ethics, Ethics of Care and Virtue Ethics) and “folk morality” were conservative toward deception in the high-risk task with an older adult when both the adult and the child had significant performance deficiencies. 
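The observation that most surveyed frameworks were conservative toward deception suggests a simple way a robot might aggregate per-framework verdicts. The rule and threshold below are illustrative assumptions, not the paper's method.

```python
# Hypothetical conservative aggregation over framework verdicts.
# The 0.75 threshold is an illustrative assumption, not from the paper.
def deception_permitted(verdicts: dict, threshold: float = 0.75) -> bool:
    """Permit deception only when at least `threshold` of the consulted
    frameworks endorse it (a deliberately conservative rule)."""
    if not verdicts:
        return False  # with no guidance, default to honesty
    return sum(verdicts.values()) / len(verdicts) >= threshold
```

Under such a rule, a single permissive framework (e.g. a utilitarian case for false encouragement) would not override the conservative consensus observed in the surveys.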